Convergence Properties and Stationary Points of a Perceptron Learning Algorithm
Author(s): not recorded

Abstract
The Perceptron is an adaptive linear combiner whose output is quantized to one of two possible discrete values, and it is the basic component of multilayer, feedforward neural networks. The least-mean-square (LMS) adaptive algorithm adjusts the internal weights to train the network to perform some desired function, such as pattern recognition. In this paper, we present an analysis of the stationary points of a single-layer Perceptron that is based on the momentum LMS algorithm, and we illustrate some of its convergence properties. When the input of the Perceptron is a Gaussian random vector, the stationary points of the algorithm are not unique, and the behavior of the algorithm near convergence depends on the step size μ and the momentum constant α.
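The momentum LMS update described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name, the use of the linear (pre-quantizer) error, and the default values of μ and α are assumptions for the example.

```python
import numpy as np

def momentum_lms_step(w, v, x, d, mu=0.01, alpha=0.9):
    """One momentum-LMS weight update for an adaptive linear combiner.

    w  -- current weight vector
    v  -- accumulated momentum term (previous weight change)
    x  -- input vector
    d  -- desired response
    mu -- step size (the abstract's μ); value here is illustrative
    alpha -- momentum constant (the abstract's α); value here is illustrative
    """
    e = d - w @ x               # error on the linear output, before quantization
    v = alpha * v + mu * e * x  # momentum smooths the stochastic gradient step
    w = w + v
    return w, v

def perceptron_output(w, x):
    """Quantize the linear combiner output to one of two discrete values."""
    return 1.0 if w @ x >= 0 else -1.0
```

Setting α = 0 recovers the plain LMS update; a nonzero α low-pass filters the weight changes, which is what couples the near-convergence behavior to both μ and α as the abstract notes.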
Similar resources
On the convergence speed of artificial neural networks in the solving of linear systems
Artificial neural networks have advantages such as learning, adaptation, fault tolerance, parallelism, and generalization. This paper scrutinizes how diverse learning methods affect the speed of convergence in neural networks. To this end, we first introduce a perceptron method based on artificial neural networks, which has been applied to solving a non-singula...
Unspecific Reinforcement Learning in One- and Two-layered Networks
The dynamics of on-line learning of a perceptron with a learning rule based on the Hebb rule with “delayed” unspecific reinforcement is studied for a special case of the feedback definition. This learning algorithm combines an associative and a reinforcement step and the relevant learning parameter λ represents the ratio of the associative to the reinforcement step. For given initial conditions...
A TS Fuzzy Model Derived from a Typical Multi-Layer Perceptron
In this paper, we introduce a Takagi-Sugeno (TS) fuzzy model that is derived from a typical Multi-Layer Perceptron Neural Network (MLP NN). First, it is shown that the considered MLP NN can be interpreted as a variety of TS fuzzy model. It is discussed that the Membership Function (MF) used in such a TS fuzzy model, despite its flexible structure, has some major restrictions. After modify...
Optimizing Non-decomposable Measures with Deep Networks
We present a class of algorithms capable of directly training deep neural networks with respect to large families of task-specific performance measures such as the F-measure and the Kullback-Leibler divergence that are structured and non-decomposable. This presents a departure from standard deep learning techniques that typically use squared or cross-entropy loss functions (that are decomposabl...
Hybrid Optimized Backpropagation Learning Algorithm for Multi-layer Perceptron
Standard neural networks based on general backpropagation learning, using the delta method or gradient descent, have some significant faults, such as poor optimization of the error-weight objective function, a low learning rate, and instability. This paper introduces a hybrid supervised backpropagation learning algorithm that uses the trust-region method of unconstrained optimization of the error objective function ...